[slimtensor] Add from_etensor factory function for ETensor to SlimTensor conversion (#16551)
Conversation
…sor conversion

Add a from_etensor() factory function that creates a SlimTensor from an ExecuTorch portable tensor (ETensor), copying its data to a target device.

Key features:
- Handles int32_t to int64_t conversion for sizes/strides (ETensor uses int32_t, SlimTensor uses int64_t)
- Supports CPU and CUDA target devices via storage()->copy_()
- Preserves tensor strides (non-contiguous layouts)
- Provides both reference and pointer overloads

Differential Revision: [D90539554](https://our.internmc.facebook.com/intern/diff/D90539554/)

[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16551

Note: links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 unrelated failures.) As of commit f1fb291 with merge base 944a436. BROKEN TRUNK - the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, #16447, #16446, __->__ #16724

Copy CUDAGuard and CUDAStreamGuard from cuda/runtime/ to aoti/slim/cuda/ to satisfy a SlimTensor requirement while avoiding a potential circular dependency: cuda_backend/main_functionalities -> aoti/slimtensor -> cuda_backend/cuda_guard.

This change copies guard.h, guard.cpp, and the test files from backend/cuda_backend to backend/aoti/slim/cuda/.

Differential Revision: [D91056808](https://our.internmc.facebook.com/intern/diff/D91056808/)
…v2 (#16446)

Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, #16447, __->__ #16446, #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor creation:

1. `aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory

Both functions support CPU and CUDA devices and handle all 7 SlimTensor dtypes.

Also add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations so the new API can be developed without impacting the current pipeline. memory_slim.{h,cpp} will replace the current memory.{h,cpp} once everything has been set up.

Differential Revision: [D90126247](https://our.internmc.facebook.com/intern/diff/D90126247/)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): #16565, #16551, #16469, #16457, #16455, #16454, #16453, #16452, #16451, #16450, #16449, #16448, __->__ #16447, #16446, #16724

Add SlimTensor-based implementations of AOTI shim functions for tensor creation:

`aoti_torch_create_tensor_from_blob_v2()` - Creates a non-owning SlimTensor that wraps existing memory using the `from_blob()` factory

Both functions support CPU and CUDA devices and handle all 7 SlimTensor dtypes.

Changes:
- Add `memory_slim.h` and `memory_slim.cpp` with SlimTensor-based shim implementations
- Add a `runtime_shims_slim` library target to TARGETS with the `CUDA_AVAILABLE=1` preprocessor flag
- Add a `cuda_shim_slim_cpp_unittest()` function for SlimTensor test targets

Differential Revision: [D90126244](https://our.internmc.facebook.com/intern/diff/D90126244/)
Merged commit f7c0c8d into gh/gasoonjia/100/base
…sor conversion (#16996)

This PR was created by the merge bot to help merge the original PR into the main branch.

ghstack PR number: #16551 by @Gasoonjia (please use that PR as the source of truth for the details, comments, and reviews)
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/100/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/100/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/99/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/100/orig

Differential Revision: [D90539554](https://our.internmc.facebook.com/intern/diff/D90539554/)

@diff-train-skip-merge

Co-authored-by: gasoonjia <gasoonjia@icloud.com>